Topic: Moment of Truth/Nick Savastano

Taylor, Member | posted 01-23-2008 09:36 AM
On AP they posted that Savastano is the polygraph examiner conducting the tests on Moment of Truth (starts tonight). They also said this is the guy that was running tests on 'Meet the Folks' (or whatever it was called). Unless exams were conducted off screen on that show, my faith is dwindling. On the Meet the Folks show he would point to a parent to ask surprise questions... the examinee was continually moving, etc. Does anyone know this man?
ckieso, Member | posted 01-23-2008 09:44 AM
I "googled" his name and he has a website, polygraphtest.com, that has all of his information. Apparently he has conducted exams and/or been a consultant on several different shows.
Buster, Member | posted 01-23-2008 09:44 AM
I posted about him before. I think someone posted on here that he conducted the tests on "Meet the Folks" ahead of time and then it was reenacted for the show. Don't quote me on that. I watched a marathon of the other show (Folks) around Christmas, and they showed reactions on the chart that were clearly fake. There is an article in this month's APA journal about "Moment of Truth," but I can't recall the facts except that APA head honchos were publicly denouncing it.
rnelson, Member | posted 01-23-2008 10:06 AM
Quick glance at polygraphtest.com... Most of the info seems benign, but this might prompt some response from anti-

quote: When operating properly, the polygraph instrument is 100% accurate in that it performs the tasks for which it is designed.

What is the accuracy rate of a polygraph examination? The polygraph instrument itself is 100% accurate. In proper operating condition, the polygraph does exactly what it is designed to do -- measure and record human physiology. As is the case in all matters, when the human element is added, examiner and examinee, variables are introduced and the rate of accuracy is slightly reduced. Reliable, impartial research on polygraph examinations consistently shows an accuracy rate of 97%. Therefore, 97 of 100 examinations conducted result in an accurate analysis. The remaining 3% consists mostly of inconclusive or indeterminate results and occasionally, false positive and false negative results. Inconclusive results mean that there is not enough measurable data on the charts to make a determination of truth or deception. False positive results mean that the examinee was truthful but was reported to be deceptive. False negative means the examinee was deceptive but reported as truthful.
stat, Member | posted 01-23-2008 10:10 AM
First, we can all agree that the new show will gain huge ratings/viewers. Second, more people will inquire about polygraph out of idle curiosity, and so they will turn to the internet for answers. They will likely read antipolygraph first, as it is the most dynamic, entertaining, and colorful site to see, along with the search engine priority allowing queries to be brought straight to antipolygraph.org.

Having said that, I posted a remark condemning the entertainment polygraph shows as distasteful, unethical, and despicable (exact words?). So Sarge asks (fairly): if other "scientific testing" is acceptable on television, then why is polygraph different? I mulled that question over for a while, and I really have a hard time answering it. The answer, in my opinion, is ugly: that there is no standardization for domestic testing, and that test errors can ruin lives. Well, how does that differ from any other setting, as erroneous output ruins lives there too? Crap. Does any silver-tongued type have a more concrete and inarguable explanation as to just why TV testing is unethical, not counting the obvious APA antisentiments toward "testing for entertainment reasons"? Time for some forethought, wisdom, and leadership.
Barry C, Member | posted 01-23-2008 11:16 AM
The answer is math. I've got to figure out how to do it correctly though.

The people are asked 50 to 75 questions up front, so if all are single-issue tests (no way), then you've got about a 90% chance of getting each one correct, which means about 5 of 50 will be errors. The question becomes, "What is the chance of picking 21 questions from those 50 without getting any errors?" In other words, I've got to answer a question truthfully, and the examiner has to find it truthful, but he's not going to do that every time since the test isn't perfect - and this one is far from perfect given the bad questions. That's where I'm not sure of the math (combining the probabilities, if appropriate).

Anyhow, here goes: With the first question (asked on TV), you've got a 45 out of 50 chance of asking a question the examiner got right, which means 90%. Then, you've got a 44 out of 49 chance of asking a question the examiner got right. When you get down to the 21st question, you've got 25 chances out of the 30 remaining questions to ask a question the examiner got correct, which means a 16% chance of error on that one question. I don't know what the combined chances are though. (Hello Ray or Lou!)

If you run a test that is 70% accurate, then you've got a 30% chance of error on question 21 (and again, assuming all other questions were correct), which is not good. If you run a test that is 50% accurate, which is probably the case in this scenario - unless they are testing each person over a two-week period - then the chance of question 21 being a good one (one to which the examinee told the truth and the examiner concluded truth) is only 17%, which means an 83% chance of error. My point: you almost certainly can't win if you tell the truth. I have ethical problems with the questions, let alone what this process might do to the contestants.
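Barry's open question ("I don't know what the combined chances are") can be sketched directly under his own setup: 50 pre-tested questions, 5 of them called wrong (90% accuracy), 21 of the 50 drawn for air. His sequential product 45/50 * 44/49 * ... is a hypergeometric probability. The numbers are his illustrative figures, not measured polygraph accuracy:

```python
# Sketch of the combined probability under Barry's assumptions: of 50
# pre-tested questions the examiner is wrong on 5, and 21 are drawn for
# the show. P(no error question among the 21 drawn) is hypergeometric.
from math import comb

def p_no_bad_question(total=50, wrong=5, drawn=21):
    """Probability that none of the drawn questions is one the examiner got wrong."""
    return comb(total - wrong, drawn) / comb(total, drawn)

p = p_no_bad_question()
print(f"chance all 21 aired questions were called correctly: {p:.3f}")
print(f"chance at least one aired question is an error: {1 - p:.3f}")
```

Under those assumptions, the chance that all 21 aired questions were called correctly is only about 5.6%, so a contestant who tells the truth throughout still faces roughly a 94% chance that at least one aired question was mis-scored.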
Buster, Member | posted 01-23-2008 11:47 AM
Here's a nice shot the New York Sun took at us today in their article reviewing tonight's show: "Though long ago debunked as a reliable adjudicator of truth, the polygraph, which measures changes in heart rate, skin conductivity, breathing, and blood pressure, is a reliable documenter of unease." Ah, I appreciate that...
derekp, Member | posted 01-23-2008 11:51 AM
Hello all, I have been reading this forum and learning a lot over the last year or so. I have never posted before and was never POed enough until now. I had a conversation earlier with other detectives here about the show, and they asked who the examiner was going to be. I told them I did not know; it could be one of the guys from Jacka$$. I just read on here who it was. I have seen the other show he was on, where he was doing the thumbs up, thumbs down crap. I looked at his website and noticed he lists that he is a member of the APA. Can a complaint be made against him with the APA? If so, I think we should do it. This guy ruins the professionalism most of us strive for as polygraph examiners. If I am out of line on here, please let me know and I'll stop. DerekP
stat, Member | posted 01-23-2008 12:45 PM
Derek, I don't think you are out of line. The APA bylaws refer to polygraph testing for entertainment purposes as being subject to an ethical violation. If you go after Mr. Savastano, then you will also need to go after Jack Trimarrico and others who conduct polygraph tests on programs for dramatic effect, and of course officials would have to somehow draw distinctions between programs, as some would argue that Dr. Phil is a serious program, though I would not (I do like the show though). I doubt anyone would dare serve warning to Jack, as his facial glare alone is enough to frighten away any would-be censurers.
I am always interested in what Barry has to say, but on the surface, math and ethics are like cousins that only see each other at holidays and family deaths. The answer regarding what makes polygraph testing on entertainment shows unethical is both an epistemic one and a moral one. It must be a clearly defined answer with no nebulous mathematical presuppositions (polybabble). An opinion poll on this board would be nice. Essays are welcome, and spelling does not coutn.
skipwebb, Member | posted 01-23-2008 03:55 PM
We'll have a hard sell trying to convince the public that it is wrong, morally or ethically, for the polygraph to be shown on TV. We have a dozen cable channels showing actual police work being done 24 hours a day around the country. We have shows on autopsies, crime scene investigations and domestic arrests. You can watch doctors delivering babies and doing heart bypass surgery, or lawyers at trial.

I think the best we can do on this one is accept it for what it is: entertainment. We can stress that the procedures depicted on television are much like watching the fishing channel. If you watch it with the understanding that making a 30-minute show takes hours and hours of actual fishing to produce, then you get it. If you think the fishing expert on TV actually catches 20 bass in 30 minutes, then it's you who has the problem. The viewing public watches car wrecks and people getting hit in the nuts for entertainment and loves it. How is this going to be different?
Barry C, Member | posted 01-23-2008 03:58 PM
It's different in that the car wrecks are car wrecks. This isn't polygraph. They might use one, but not for the purpose of learning the truth, which we can demonstrate mathematically. People will probably walk away from this believing in polygraph, but that doesn't make it right.
rnelson, Member | posted 01-23-2008 05:31 PM
Skip, I think you have a point. This might be one of those genies that doesn't go back in the bottle. If I had my 'druthers between an accurately portrayed polygraph on TV and some obviously dramatized and mythical theatrics, I might actually go for the drama. Most sex offenders I talk to are bothered by the show.

Barry, I mostly agree with you. However, at present, our handscoring methods do not provide mathematical results. Our understanding of polygraph accuracy, for all of our present hand-scoring methods, is not statistical but empirical. What this means is that we define the method and do experiments to see the empirical (observed data) results. Then we describe the results in terms of percent correct. But that doesn't make it a mathematical test. It's a sorting (not mathematical) test, with an empirically understood classification accuracy rate.

I'll go further and argue that the statistical meaning of most of our computer scoring is useless, because we don't know enough about how in the world those numbers are derived by our proprietary scoring models. If we don't know how we got the numbers, we don't really know what they mean statistically, except in terms of their empirical classification accuracy rates. That's OK, but it would be better to have both an empirical understanding of classification accuracy and some idea of the level of statistical significance for how well our data fit our known model. Even better, we'd like to have a pretty clear understanding of how we derived our known model. Only OSS-3 gives you all that, and can parse data from single-issue, multi-facet, and mixed-issue exams with two to four RQs and three to five charts.
A mathematical/statistical handscoring system would be like OSS-2, in which we had an empirical description of the normative data. That would typically be provided in the form of mean and variance parameters for the classification groups (i.e., truthful or deceptive persons), and would often be stratified across other dimensions such as age, gender, ethnicity, SES, level of functioning, or other functional parameters that have been identified as affecting the normative data. A statistical/mathematical test first defines the model (for truthful or deceptive scores) and then evaluates the degree or significance with which an individual score fits that model. That's how OSS-2 and OSS-3 work, and it's how all manner of tests work in many diverse fields of science and testing.

It would be possible to study and define normative parameters for our several handscoring systems, and then set cut-scores according to statistical models. It's about time someone did that. All it would require is a cohort of examiners to score a confirmed training sample that adequately represents the populations we intend to apply it to. The same sample could be scored by cohorts using Backster, Horizontal, Federal, Utah and whatever other scoring systems. Having done that, we could actually compare the meaning of one examiner's +3 to another examiner's score. At present we cannot do that. We don't really even know the statistical meaning of +3 or -3.

But wait, don't call yet, because if we did this we could then feed the scored data for each cohort to an equation like Fleiss' kappa and calculate estimates of interrater reliability for each of the handscoring systems. That would make our handscoring systems into statistical tests, and would begin to quell some of the assertions about unknown reliability and lack of scientific foundations.

.02

r
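As a sketch of the interrater-reliability step Ray proposes, Fleiss' kappa can be computed from nothing more than per-chart category counts for a cohort of examiners. The scores below are hypothetical, invented purely for illustration (not data from any scoring system study):

```python
# Sketch: Fleiss' kappa over a cohort of examiners who each classify the
# same charts as deceptive (DI), truthful (NDI), or inconclusive (INC).
# The ratings below are made up for illustration only.

def fleiss_kappa(ratings):
    """ratings: per-chart category counts, e.g. [n_DI, n_NDI, n_INC].
    Each row must sum to the same number of raters."""
    n_raters = sum(ratings[0])
    n_subjects = len(ratings)
    # Observed per-chart agreement
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]
    p_bar = sum(p_i) / n_subjects
    # Chance agreement from the marginal category proportions
    totals = [sum(row[j] for row in ratings) for j in range(len(ratings[0]))]
    p_j = [t / (n_subjects * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Five examiners score six confirmed charts (hypothetical data):
charts = [
    [5, 0, 0],  # all five call DI
    [4, 1, 0],
    [0, 5, 0],
    [1, 3, 1],
    [0, 4, 1],
    [5, 0, 0],
]
print(f"Fleiss' kappa = {fleiss_kappa(charts):.3f}")
```

Running the same confirmed sample through cohorts using different handscoring systems, and comparing the resulting kappas, would be one way to put numbers on the reliability comparison Ray describes.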
------------------ "Gentlemen, you can't fight in here. This is the war room." --(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)
rnelson, Member | posted 01-23-2008 07:28 PM
Missed this one earlier.

quote: The answer is math. I've got to figure out how to do it correctly though. The people are asked 50 to 75 questions up front, so if all are single issue tests (no way), then you've got about a 90% chance of getting all correct, which means 5 will be errors. The question becomes, "What is the chance of picking 21 questions from those 50 without getting any errors?"
It doesn't actually matter how many you have to choose from. What matters is how many you have to choose (21). So .9 raised to the power of 21 = just over .1. But it's not really that simple. The conceptual trick here is the same as in PCSOT mathematics: to understand the role of dependence and independence, because that determines whether we add or multiply our error rates, according to the addition rule and multiplication rule for dependent and independent probability events.

In PCSOT, for example, we pick three high-base-rate targets for a maintenance test (e.g., physical contact with a child, use of pornography, and masturbation to thoughts of children - only because they are reported at rather high base rates during polygraph pretest interviews), and arbitrarily set the rates at .5 (which might be realistic in some programs). So, all else being equal, we assume there is a .5 chance the examinee will be truthful to each question. Then, .5 raised to the power of three questions (all else assumed equal) = .125. So you can assume that approximately 12 percent of sex offenders would be truthful to ALL three high-base-rate targets. Now if you assume moderate base rates of .15 (realistic), then you can combine three or four targets with an anticipated overall rate of truthfulness at roughly 50%.

Conservative scoring/parsing rules prohibit split calls or mixing SR and NSR results, so if ANY question produces an SR result, all bets are off with the other questions, regardless of their scored results, unless they too produce SR numbers. So, when ANY question is SR, we report INC for any question that is not SR. To achieve an NSR result, the examinee must pass ALL questions. If any question is INC, and none of the questions are SR, then the test is INC. This means that ANY INC response can also cause the examinee to not pass. "ANY" is an indicator of "dependence," for which we invoke the addition rule.
That means that if there is a probability of INC for each question, and any question can cause everything to be INC, then we must add the INC rates for the number of distinct questions. The result of the addition rule is to substantially inflate the likelihood of an INC test the more questions we add. Or, in significance testing, it substantially inflates the specified alpha and the likelihood of a spurious result or type-1 error. This is an example of why laboratory researchers do not conduct multiple significance tests at the same time, and is exactly why we use statistical models based on ANOVA. It's also why knowledgeable researchers are aghast at some of our polygraph testing procedures, and it's the reason behind the specially designed screening rules in OSS-3 (which uses an ANOVA procedure). We'd have to do the same thing with our error rate. Imagine, if you have a test that's 90% accurate for any question, and you then ask 10 or 11 questions. What do you think the likelihood is that any subject will experience an error? Yah, pretty high.

quote:
In other words, I've got to answer a question truthfully, and the examiner has to find it truthful, but he's not going to do that every time since the test isn't perfect - and this one is far from perfect given the bad questions. That's where I'm not sure of the math (combining the probabilities, if appropriate). Anyhow, here goes: With the first question (asked on TV), you've got 45 out of 50 chance of asking a question the examiner got right, which means 90%. Then, you've got a 44 out of 49 chance of asking a question the examiner got right. When you get down to the 21st question, you've got 25 chances out of 30 remaining questions to ask a question the examiner got correct, which means a 16% chance of error. I don't know what the combined chances are though. (Hello Ray or Lou!)
Well, that depends, in part, on whether you are a Bayesian or an inferential statistician. Also, even though we expect some errors, they are not absolutes. So the error rate for each question might be the same if you are an inferentialist, and might vary with the base rate if you are a Bayesian. You are correct that the cumulative rate of error changes with the number of questions, but the error rate for individual questions is independent (see above).

quote: If you run a test that is 70% accurate, then you've got a 30% chance of error on question 21 (and again, all other questions were correct), which is not good. If you run a test that is 50% accurate, which is probably the case in this scenario - unless they are testing each person over a two-week period - then the chance of question 21 being a good one (one to which the examinee told the truth and the examiner concluded truth) is only 17%, which means an 83% chance of error. My point: you almost certainly can't win if you tell the truth. I have ethical problems with the questions, let alone what this process might do to the contestants.
It makes more sense if we are clear about whether we are employing an inferential or Bayesian model. This sounds more like we're testing the examiner, to see if he got it right. Is that how the game is played? Is the idea that the examinee has to tell the truth (presumably while denying some behavior), and the examiner has to conclude he is truthful? If so, then the game is a game of specificity.

r
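The multiplication-rule and addition-rule arithmetic Ray walks through above can be sketched in a few lines. All rates here are illustrative assumptions, not validated figures; in particular the 5% per-question INC rate is invented for the example:

```python
# Illustrative rates only - none of these are validated polygraph figures.

# Multiplication rule (independent events): chance all 21 questions pass
# when each has a 90% chance of a correct call.
p_all_21 = 0.9 ** 21
print(f"0.9 ** 21 = {p_all_21:.3f}")  # "just over .1"

# PCSOT sketch: three targets, each with an assumed .5 chance of truthfulness.
p_truthful_to_all = 0.5 ** 3
print(f"0.5 ** 3 = {p_truthful_to_all:.3f}")

# Addition-rule effect: if ANY question going INC makes the whole exam INC,
# the cumulative INC likelihood inflates as questions are added.
inc_rate = 0.05  # assumed per-question INC rate
for k in (1, 3, 10):
    additive = min(1.0, k * inc_rate)   # simple additive approximation
    exact = 1 - (1 - inc_rate) ** k     # exact complement
    print(f"{k} questions: additive ~ {additive:.3f}, exact = {exact:.3f}")
```

With ten questions, even a modest 5% per-question INC rate compounds to roughly a 40% chance that something in the exam goes INC, which is the alpha-inflation point Ray compares to running multiple significance tests.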
stat, Member | posted 01-23-2008 10:10 PM
I appreciate all the opinions, and all of you have changed my opinion in one day. I too might prefer the drama over real-time borefest exams if I were a pedestrian. Barry was correct in the mantra of math being the savior, and Skip's analogies were so on point. I am ALWAYS impressed with Skip. It is truly tragic that I have not flattered him with some photoshop work. Skip's comment about people liking to watch film of people getting their nuts cracked reminds me (and probably Ray also) of the offensive but on-point movie "Idiocracy." It seems that this new show is a byproduct of our present-day Idiocracy. Sigh. I do expect an uptick in polygraph research and discovery by pedestrians surfing the web. By addressing the TV show by name, antipolygraph.org already has search priority when you google the name of the show. So, I wish that some...
...type of person would do some damage control over there, per my theory of increased traffic, much of which will be hit jobs from journalists from opposing stations and/or co-ops. Can't someone write a brief statement that codifies the ethical problems, in layman's terms, regarding polygraph in entertainment? Anybody? Or is it not necessary? $10 says George scores another network interview within the next 30 days. I further predict a two-fold surge of both interested passersby and future polygraph haters. Maybe I am the fool.
Barry C, Member | posted 01-23-2008 10:11 PM
quote: It makes more sense if we are clear about whether we are employing an inferential or bayesian model.
That's the problem: which is more appropriate? In any event, who or what we're looking at isn't all that important. My point is that these people almost certainly can't "win" regardless of how we approach it. Yes, many will believe the examiner's opinion and move passively toward a belief that polygraph works because, well, they want to believe it, but don't we have an obligation to tell people that isn't real polygraph and we don't do that stuff? Remember, others are going to view the show and come away concluding polygraph is intrusive and dangerous. I think our take should be to say what they're doing is garbage and it's irresponsible, even if it's entertaining. We search for the truth because that's the right thing to do - whether it's identifying the guilty or freeing the innocent - but these people don't care about the truth, and they certainly don't care what effect it has on real polygraph examiners with a conscience who care about the truth.
Taylor, Member | posted 01-23-2008 11:07 PM
Well, the first one got the boot for lying. They keep saying the contestants undergo vigorous polygraph testing. I wish Mr. Savastano or someone would come here and state what this 'vigorous' testing is... they can't be doing specific issues on each question... must be a POT??? I would be curious to find out how much time is spent with each subject. Plus, the questions are less than desirable... think... fantasy. Most of the questions are just for shock value on the significant others and friends. I personally didn't find it that entertaining. I can see lawsuits in the future with this show.

With that said, I am with Skip. Can you imagine DeNiro explaining the polygraph to Greg Focker before he asked about porn? It would have ruined the scene. That movie was hilarious. I think we should just state it is entertainment and not how polygraph works in the real world. I will wait a day or two before posting at AP to see if we want an official plan of action... that is, unless another sex offender posts... then I can't control myself.
rnelson, Member | posted 01-24-2008 01:32 AM
Barry, both methods are appropriate and useful. Bayesian methods are calculated by frequency (# of hits/misses), while inferential models depend on understanding variance, which requires parametric assumptions about the data. Parametric assumptions impose a lot more study and careful handling of the data. Nonparametric assumptions are simpler and in many ways more robust. Nonparametric models can be expected to offer less statistical power (increased likelihood of type-2 error, or failing to find differences that do exist = INC), but no increase in type-1 errors = FPs and FNs. Bayesian models are nonparametric, though we generally don't think of them that way (they are Bayesian), because they do not require any understanding of the location and shape of a distribution of scores, only the frequency counts for hits/misses. But that's what makes them non-robust against other influences over frequency (base rates). Parametric models employ calculations that are not based on simple frequency counts (averages and deviations), and are more robust against external influences on frequency (base rates).

Nobody has pointed this out yet, but in polygraph handscoring we walk both sides of the fence with parametric assumptions. On the nonparametric side, we haven't really much studied the distributions (location and shape) of scores produced by different scoring systems and different techniques. Out of fairness, Barland started to do this in 1985, and Krapohl did again during 1999, but as a whole we've done very little with these possibilities. Instead we assign points according to visual, though measurable, differences in response size, without concern for the actual physiological relevance of our scale of measurement. Think about it: blood pressure is not measured in mm of paper, but mm of mercury. Electrical activity is also not measured in mm of paper or computer screen real estate, but ohms and such. Respiration is not measured in linear segments but volume.
Then we cross the parametric fence and start assigning integer values from +3 to -3, and completely violate the nonparametric requirements for uniformity. OSS-2 meets this requirement, and will produce as many 3s as 2s and 1s, but even though that is mathematically and procedurally correct, it bugs us human examiners because it's counterintuitive and goes against our unfounded parametric assumptions around ratios that are set atheoretically and even arbitrarily. So we use ratios when we have no mathematical or theoretical basis for doing so. Now guess one of the reasons OSS-2 worked so well in its intended application: because it was built correctly (i.e., with the correct assumptions around the uniform shape of the data = equal-size bins). There are other reasons too. Anyway, we all know what happens when you try to walk both sides of the fence - you get racked.

APZ showed up at anti. He's been a member since 6/1/2001 and has posted one time. Odds are this is Mr. Zelicoff, hawking his Bayesian monte-carlo experiment, based on frequencies published by Honts in Kleiner 2002. Something about his results bugs me (aside from the fact that he makes it look bad), and I haven't taken the time to figure out what. For one thing, he calculates rather wide confidence intervals. Perhaps more importantly, he ran the monte-carlo with a single mode, and polygraph tests are actually bi-modal (we test for truthfulness and for deception). The difference will be that inconclusive rates get calculated as unequivocal errors and will suppress the PPV and NPV figures in unimodal calculations. There might be a bi-modal example that is more realistic. Also, Honts reported the data for Patrick and Iacono's blind rescores of the RCMP data, and they had a high rate of INC for truthful subjects. Then those INCs get factored as DIs in the PPV calculations. I'm tempted to call bullshit on him. I'll have to think about this some more.
He's not incorrect, though, in his assertion that weighted averaging would have been more correct in Honts in Kleiner.

r
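The base-rate sensitivity Ray attributes to frequency-based (Bayesian) figures can be sketched with a simple predictive-value calculation. The .90 sensitivity and specificity below are placeholders for illustration, not values from Honts, Kleiner, or any study:

```python
# Illustrative only: how positive/negative predictive value move with the
# base rate of deception, holding sensitivity and specificity fixed at
# placeholder values (.90/.90 are NOT published polygraph statistics).

def predictive_values(base_rate, sensitivity=0.90, specificity=0.90):
    tp = base_rate * sensitivity            # deceptive, called deceptive
    fn = base_rate * (1 - sensitivity)      # deceptive, called truthful
    tn = (1 - base_rate) * specificity      # truthful, called truthful
    fp = (1 - base_rate) * (1 - specificity)  # truthful, called deceptive
    ppv = tp / (tp + fp)   # P(deceptive | test says deceptive)
    npv = tn / (tn + fn)   # P(truthful  | test says truthful)
    return ppv, npv

for base in (0.05, 0.25, 0.50):
    ppv, npv = predictive_values(base)
    print(f"base rate {base:.0%}: PPV = {ppv:.3f}, NPV = {npv:.3f}")
```

At a 50% base rate PPV and NPV match the test's accuracy, but at a 5% base rate the PPV drops below a third even though the test itself hasn't changed - which is why frequency-based figures can look very different depending on the population assumed, and why classifying INCs as errors shifts those figures further.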
rcgilford, Member | posted 01-24-2008 08:15 AM
Well, that didn't take long. I hadn't even gotten into my office today before I was asked if I had seen this show. I knew it was coming on, but didn't know it was last night. I told the guy who stopped me that I had not seen it, but explained to him that it was complete garbage and that what the show depicts has nothing to do with how a polygraph works. I explained to him that the whole thing was for entertainment only and was a fraudulent use of the polygraph. Most people who watch this show are going to think that is what polygraph is, and about all we can do as a profession is explain to them that it is total BS. Personally, I think any polygrapher who goes on a show like this should be bounced from the APA, regardless of who that person is.
blalock, Member | posted 01-25-2008 08:49 AM
Well, I finally had a chance to review the Moment of Truth show that I Tivo'd. Not too happy about the concept of the show, I must say. However, that being said, I think the show provides some very good material for comparison questions! Of the easier questions asked, a number of them would work quite well as CQs! Anybody notice this?

Ben
Taylor, Member | posted 01-25-2008 09:29 AM
Yes, that is why I commented the questions were less than desirable. Do you think... bad RQ but could be an okay CQ?

"Did you have a fantasy while at church?" There should be an act associated with that fantasy... did he 'm' at church?

"Did you keep your hands on a female longer than necessary?" Just ask if he touched any of her sexual organs.

Again, I would be interested to know what testing format was used for 50 questions and how much overall time is spent on each subject.
stat, Member | posted 01-25-2008 10:13 AM
I haven't really watched the show, aside from some painful clips. I suspect that the contestants are run RI freestyle - a sort of POT test (bastardized). Just exactly what controls would you use against a relevant question of "As an underwear model, did you ever stuff your shorts with padding?" C'mon, those and other questions are "new ground" for polygraph, and to run a CQT is ridiculous. Also, he is using an analog instrument - granted, it could be for effective reenactment, but the whole mess is smokey at best. I refuse to join an organization that allows such horse shit. The guy is making formal calls for Pete's sake, and he is ruining lives. I apologize for using Jack's name in the same breath. Jack runs real tests on real cases for Dr. Phil. This carnival crap is a whole 'nother rodeo. To think I have wasted so much time being afraid of screwing up a test and being violated by the APA for an accident or an experiment, when such monkey business is stamped acceptable.
Barry C, Member | posted 01-25-2008 10:31 AM
That's why it is so important that we identify and require examiners to run validated techniques. This wouldn't qualify, for the reasons stated, and then the guy could be dealt with appropriately. I wouldn't avoid the APA now. Instead, I'd join and demand changes. The end result would benefit everyone.
Barry C, Member | posted 01-25-2008 10:34 AM
Stat, why not consider drafting a resolution condemning this for a multitude of well-articulated reasons (including the math that shows telling the truth is a sure way of losing)? We could sign on and post it on the public site, or better yet, maybe we could get the APA BOD to adopt the resolution and put it up on its site. I'll volunteer to do the grammar edits.
stat, Member | posted 01-25-2008 10:41 AM
Nice cheap shot at my grammer, Barry (lol). I think it would be wise to require the show to air a disclaimer from the APA stating that such tests are not afforded support for ethical practices by the American Polygraph Association. Such care is taken when people use products in an unethical way, and warnings are issued per the requirements of many manufacturers, i.e., "The stunts that are about to be performed are not supported by the Department of Motor Vehicles, the Ford Auto Co., and/or any subsidiaries, as the vehicle and driver are modified and/or specifically designed yadayadayada." 2 cents. This show is too compelling to go away anytime soon. I was in a computer class last night, and even my professor wouldn't stop talking about it since they have an examiner in the room. I just wanted to learn my stuff and leave.
stat, Member | posted 01-25-2008 10:48 AM
I believe the show would appreciate any press it could get----so caution would be wise. If the APA took a stand, the show's producers would likely relish the controversy. It could be a mistake, or it could be the best act of polygraph legitimacy since EPPA (a whole different debate). I didn't re-up with the APA last summer, so I would be hard pressed to do any pro bono work for them.
[This message has been edited by stat (edited 01-25-2008).] IP: Logged |
Barry C Member
|
posted 01-25-2008 10:50 AM
It's not just your grammar. I've used my red pen to help out many polygraph folk. If we put together something good, I'd bet we could get it adopted. APA leadership has already spoken out against it, but nothing has been formalized in writing - which some lawyers down the road might like to have in their hands. IP: Logged |
Taylor Member
|
posted 01-30-2008 09:31 AM
In today's news I received the following: ** Representatives for Drew Peterson contacted producers for FOX's lie-detector TV show "Moment of Truth" to suggest that he take a test on the air, FOX News confirmed on Tuesday. TMZ.com first reported that lawyers for Peterson, a suspect in the disappearance of his fourth wife, Stacy, asked the program's producers to administer a lie-detector test to their client on national TV. Stacy Peterson disappeared in October and is believed to be dead. Peterson is a prime suspect. He insists he is innocent and that she ran off with another man. His third wife, Kathleen Savio, also died under mysterious circumstances in 2004, when she was found in a bathtub. Her body was exhumed for further forensic analysis, and officials have reclassified her death as a "homicide staged to look like an accident." Drew Peterson repeatedly has refused requests from Stacy's family that he take a polygraph test but apparently has had a change of heart. ** Mr. Peterson must have taken a private exam to feel comfortable enough to want a 'live on-air' polygraph. I sure hope that if he is tested, the examiner has all the CM monitors and is well trained in CM detection. IP: Logged |
rnelson Member
|
posted 01-30-2008 10:14 AM
This would be a great opportunity for the APA Board to assert itself and issue a formal opinion/recommendation/position that the use of polygraph in media theatrics around criminal investigations is not recommended, and that if APA members choose to participate in such cases they should do so with the expectation of a reasonable degree of professional oversight and structure, including:
- Examination shall be conducted with modern computerized polygraph equipment, including current technology for monitoring/recording of behavior or peripheral nervous system activity, distinct from the traditional sympathetic nervous system component sensors,
- Examination shall be conducted in a proper setting, without audience or distraction,
- Examination shall be continuously recorded in its entirety, by audio and video, synchronized with the physiological recordings, from the moment the examinee enters the examination laboratory to the moment he exits the laboratory,
- Examination shall be scored by the most reliable methods available, according to principles described in research, using measurable physiological features (vs. impressionistic, unmeasurable features) that can be replicated through mechanical or automated measurement and statistical analysis of the physiological data,
- Examination shall be subject to Quality Assurance review by a panel of blind reviewers, to be appointed by the APA, prior to any release or presentation to news or entertainment media.
and finally,
- APA members may face censure or other professional disciplinary action for failing to comply with the requirements as set forth.
and anything else I forgot. All it would take would be a draft, email-vote, or straw-poll from the APA, followed by a mailing or even emailing from the APA. This is a matter that affects the profession as a whole. Structure is a good thing. r
------------------ "Gentlemen, you can't fight in here. This is the war room." --(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)
[This message has been edited by rnelson (edited 01-30-2008).] IP: Logged |
Taylor Member
|
posted 01-31-2008 04:35 AM
New article on the Moment of Truth http://www.buddytv.com/articles/the-moment-of-truth/fox-in-defense-of-moment-of-tr-16191.aspx Although more than 23 million viewers stuck around after American Idol to witness the premiere episode of The Moment of Truth last Wednesday, the positive numbers doesn't stand for the approval and contentment of some spectators who now regard the show as a huge let down. Last week's episode featured contestants who are strapped to a polygraph to determine if they are telling the truth or not, with every truthful answer earning the contestant the chance to win a big cash prize. The first round featured a personal trainer named Ty, who was forced to leave the game show when the polygraph revealed he was lying about never having touched a female client inappropriately. The second contestant of the night was a guy named George, who admitted to a question about having sexual fantasies during Mass and being addicted to gambling. The controversial premise of the series, along with the debut episode's sluggish pacing, has earned much critical scorn. And while the outpouring attention may indicate a huge hit in the hands of Mike Darnell, FOX president of the Dark Alternative Programming, many are hoping that The Moment of Truth will deliver more appealing future installments. According to Darnell, who acknowledges viewers' complaints on the show, Moment of Truth's pacing will pick up, particularly once the show shifts to the 8pm timeslot in early March. "It's always been a semi-issue with the show because you have the pauses between the revelation and [the lie detector result]," Darnell explained. "You gotta have that to watch the reaction of the friends and family. 
But we're going to try to quicken the pace a little bit." "We intentionally opened with a middle-of-the-road episode," he added. "I didn't want people from middle America to freak out coming out of American Idol." For those who like to see a bunch of contestants spilling out their darkest secrets on national television for money, you can catch The Moment of Truth every Wednesday at 9/8c on FOX. If you click on the actual link there is a 'how do you feel about this show' question that you can participate in. As for Ray's comments above and Eric's '21st Century Examiner'....great ideas! I think we need to form a group and officially submit these ideas to the APA. We can't wait for the APA or AAPP to read this board. Although venting helps (us) out momentarily, it accomplishes nothing in the long term if we don't take some action. Just my 2 cents' worth. I am more than willing to sit on a committee with several of you and come up with solid plans to submit to both APA and AAPP.
IP: Logged |
stat Member
|
posted 01-31-2008 09:46 AM
Thanks D T for the comments----my 21st century piece was a Jerry Maguire moment where I had diarrhea of the typing hands. I will volunteer to take on the task of moving to Hollywood and QCing those tests----sigh----someone has to take one for the team. Now, where's my Bermuda shorts, black socks, and camera? The original Brokeback MT??:
IP: Logged |
Barry C Member
|
posted 01-31-2008 10:42 AM
I'd gladly help with a document, but what's the focus: a condemnation of the show, or what should be done (as Ray suggested) to conduct an exam with a result one could put some faith in? The latter would seem to mean an acceleration of the new APA standards that aren't in full force until 2012. The doc could be twofold, too. IP: Logged |
stat Member
|
posted 01-31-2008 11:00 AM
A formal "Needs Analysis" needs to be written. It has 3 simple parts:
1. The status quo, with the problem
2. The need for change
3. The solution
I think that entertainment polygraph necessitates a higher standard of practice than "street" polygraph testing. Think about it like (here comes another automobile analogy) how performance cars are expected to have higher-performance components, a higher level of safety, and a higher level of operational skill than, say, the average daily car and driver.
IP: Logged |
stat Member
|
posted 01-31-2008 02:13 PM
George is reporting that the current examiner for the show "is joined by Pete Perrin of Lake Forest, California, a 2006-2007 member of the California Polygraph Association's board of directors." I wonder if Pete Perrin also uses an analogue scratchbox, as Nick does. The "Polygraph Examiner of the 21st Century" does not use rustic, nostalgic, retro gear.
IP: Logged |
Taylor Member
|
posted 02-25-2008 09:14 PM
FYI, Moment of Truth is on right now. I have intentionally not watched it since the first show. Today there was an enticement ad stating that this woman's last boyfriend is going to ask her 'If I wanted to get back with you, would you leave your husband?' What the hell kind of question is this! Intent???? My frustration keeps going up….. BTW, they just asked, 'If you knew you wouldn't be caught, would you steal money from your businesses?' Intent????? Does anyone know if the APA or any other professional organization is doing anything about this show? At the very least we could contact reporters and front-load the point that these questions are CRAP!
IP: Logged |
blalock Member
|
posted 02-26-2008 09:13 AM
I tuned in for just a couple of minutes to "Moment of Truth." The contestant was only a couple of questions away from $200,000 when an interesting obstacle caused her to be eliminated. She apparently lied when asked the following "relevant" question: "Do you think you are a good person?" She said "Yes" on the show, and the mysterious, omnipotent voice declared that she was lying. Now, if this question is not a great comparison question, designed to elicit a response, I don't know what is... Too bad for the contestant...------------------ Ben blalockben@hotmail.com IP: Logged |
stat Member
|
posted 02-26-2008 09:27 AM
Damn right Ben!IP: Logged |
Taylor Member
|
posted 02-26-2008 10:11 AM
Exactly! BTW, this article came up this morning which is worth the read - shows us in a more positive light. http://www.columbian.com/news/localNews/2008/02/02252008_The-truth-about-the-polygraph.cfm IP: Logged |
sackett Moderator
|
posted 02-26-2008 11:26 AM
Donna/blalock, as soon as you posted I searched my TV, from Fox News mind you, and discovered that we here on the left coast are an hour behind. No way of knowing what important news I missed. Anywho, I waited 45 minutes and saw the entire episode. I can't explain the frustration and anger that this BS show's presentation of polygraph is causing me. My wife has to keep calming me down with cold beer... If someone comes up with a way to stifle these people, I'll join in. The problem is, of course, freedom of speech and stupidity in the name of entertainment. BTW, have any private examiners on this board started getting potential customers who want to know "is my wife thinking about leaving me" type questions? Jim IP: Logged |
Taylor Member
|
posted 02-26-2008 04:20 PM
Now that you mention it, I have had about 3 calls in the past week on fidelity exams....nothing scheduled, just asking questions. I normally get 2-3 calls a month. Normally I talk them out of it, or my requirement of therapeutic involvement makes them go elsewhere. I will start asking if they have seen the show.... Jim, I hate to admit this - but I am glad I am not the only one frustrated with the media and having to consume cold beverages to calm myself down! Between the media and politics.... IP: Logged |
detector Administrator
|
posted 09-15-2008 05:20 PM
Hey everyone, I was about to start a new topic, but chose to bump this one as it is a natural continuation. I heard that formal multi-member complaints were issued to the APA and CAPE regarding Savastano's handling of the testing on Moment of Truth, alleging that it violates association by-laws. If this is true, has anyone heard of any decisions? On another note, did this show and discussion ever lead to any serious consideration of 'public statements' or policy changes? In other words, is anything in the works? As you guys probably know from my last newsletter, I'd like to make my article on this issue as well rounded as possible, so is there anything new that would make that article more complete, balanced and relevant? ------------------ Ralph Hilliard PolygraphPlace Owner & Operator Be sure to visit our new store for all things Polygraph Related http://store.polygraphplace.com IP: Logged | |